A Appendix

Neural Information Processing Systems

The next theorem shows that this posterior measure is the solution to a certain optimization problem. […] P is dominated by null P. […] More details about the Wasserstein distance can be found in Chapter 7 of Ambrosio et al. [2005]. […] We will now discuss how to approximate each term in (46). […] We chose ϵ = 1% in our implementation.
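
For context on the excerpt above: the squared 2-Wasserstein distance between two Gaussian measures N(m_1, C_1) and N(m_2, C_2) admits a well-known closed form (the standard Gelbrich/Bures expression; this is general background, not a reproduction of the paper's equation (46)):

\[
W_2^2\bigl(\mathcal{N}(m_1, C_1), \mathcal{N}(m_2, C_2)\bigr)
  = \lVert m_1 - m_2 \rVert^2
  + \operatorname{Tr}\!\Bigl(C_1 + C_2 - 2\bigl(C_1^{1/2} C_2\, C_1^{1/2}\bigr)^{1/2}\Bigr).
\]

In infinite dimensions the same expression holds for Gaussian measures whose covariance operators are trace class.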


A Generalized Weighted Loss for SVC and MLP

Portera, Filippo

arXiv.org Artificial Intelligence

Standard algorithms usually employ a loss in which each error is simply the absolute difference between the true value and the prediction, in the case of a regression task. In the present work, we introduce several error-weighting schemes that generalize this established routine. We study both a binary classification model for Support Vector Classification and a regression network for the Multi-layer Perceptron. Results show that the error is never worse than with the standard procedure and in several cases it is better.
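
As a minimal sketch of the idea described in the abstract above (assuming NumPy; the simple per-sample weighting below is a hypothetical illustration, not one of the authors' specific schemes), a weighted absolute-error loss for regression might look like this:

import numpy as np

def weighted_absolute_loss(y_true, y_pred, weights=None):
    # Per-sample absolute errors, as in the standard regression loss.
    errors = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    # With weights=None this reduces to the usual unweighted mean absolute error;
    # a non-uniform weight vector is one (hypothetical) instance of the
    # error-weighting schemes the paper generalizes over.
    if weights is None:
        weights = np.ones_like(errors)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * errors) / np.sum(weights))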


Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning

Wild, Veit D., Hu, Robert, Sejdinovic, Dino

arXiv.org Artificial Intelligence

We develop a framework for generalized variational inference in infinite-dimensional function spaces and use it to construct a method termed Gaussian Wasserstein inference (GWI). GWI leverages the Wasserstein distance between Gaussian measures on the Hilbert space of square-integrable functions to determine a variational posterior via a tractable optimisation criterion, and it avoids the pathologies arising in standard variational function-space inference. An exciting application of GWI is the ability to use deep neural networks in the variational parametrisation of GWI, combining their superior predictive performance with principled uncertainty quantification analogous to that of Gaussian processes. The proposed method obtains state-of-the-art performance on several benchmark datasets.
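
As a rough finite-dimensional illustration of the quantity mentioned in the abstract above (GWI itself works with Gaussian measures on an infinite-dimensional function space, so this is only a conceptual sketch using NumPy/SciPy, not the paper's implementation), the squared 2-Wasserstein distance between two multivariate Gaussians can be computed from its closed form:

import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m1, C1, m2, C2):
    # Squared 2-Wasserstein distance between N(m1, C1) and N(m2, C2),
    # using the standard closed form for Gaussian measures.
    mean_term = float(np.sum((np.asarray(m1) - np.asarray(m2)) ** 2))
    root_C1 = sqrtm(C1)
    # sqrtm may return a complex array with a negligible imaginary part.
    cross_term = sqrtm(root_C1 @ C2 @ root_C1)
    cov_term = float(np.real(np.trace(C1 + C2 - 2.0 * cross_term)))
    return mean_term + cov_term

# Example: two 2-dimensional Gaussians.
# d2 = gaussian_w2_squared(np.zeros(2), np.eye(2), np.ones(2), 2.0 * np.eye(2))

In generalized variational inference, a divergence of this kind typically regularises an expected loss term; the paper's exact criterion is not reproduced here.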